
    Robust multi-camera tracking from schematic descriptions

    Although monocular 2D tracking has been widely studied in the literature, it suffers from inherent problems, mainly when handling persistent occlusions, that limit its performance in practical situations. Tracking methods that combine observations from multiple cameras can overcome these problems. However, most multi-camera systems require detailed information from each view, making their use impossible in real networks with low transmission rates. In this paper, we present a robust multi-camera 3D tracking method that works on schematic descriptions of the observations performed by each camera of the system, thus allowing its deployment in real surveillance networks. It is based on unspecific 2D detection systems working independently in each camera, whose results are combined by means of a Bayesian association method based on geometry and color, allowing the 3D tracking of the objects in the scene with a Particle Filter. The tests performed show the excellent performance of the system, even correcting possible failures of the 2D processing modules.
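The association step above can be sketched in miniature. The snippet below scores a putative match between two cameras' schematic observations with a Gaussian geometric term and a Bhattacharyya color term; all names, distance scales and the particular similarity measures are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def association_likelihood(det_a, det_b, sigma_g=0.5, sigma_c=0.2):
    """Toy Bayesian association score between two single-camera
    observations, combining a geometric term (distance between their
    ground-plane back-projections) and a color term (histogram
    similarity). Field names and scales are illustrative."""
    # Geometric likelihood: Gaussian on ground-plane distance
    d_geo = np.linalg.norm(np.asarray(det_a["ground_xy"], float)
                           - np.asarray(det_b["ground_xy"], float))
    p_geo = np.exp(-0.5 * (d_geo / sigma_g) ** 2)
    # Color likelihood: Bhattacharyya coefficient of normalized histograms
    ha = np.asarray(det_a["hist"], float); ha = ha / ha.sum()
    hb = np.asarray(det_b["hist"], float); hb = hb / hb.sum()
    bc = np.sum(np.sqrt(ha * hb))            # 1.0 for identical histograms
    p_col = np.exp(-0.5 * ((1.0 - bc) / sigma_c) ** 2)
    return p_geo * p_col                     # independence assumption
```

A matched pair of observations scores near 1, while detections that disagree in position or appearance score near 0, so the tracker can keep only consistent cross-camera associations.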

    Kernel bandwidth estimation for moving object detection in non-stabilized cameras

    The evolution of the television market is led by 3DTV technology, and this tendency may accelerate over the coming years according to expert forecasts. However, 3DTV delivery over broadcast networks is not yet sufficiently developed, and acts as a bottleneck for the complete deployment of the technology. Thus, increasing interest is devoted to stereo 3DTV formats compatible with current HDTV video equipment and infrastructure, as they may greatly encourage 3D adoption. In this paper, different subsampling schemes for HDTV-compatible transmission of both progressive and interlaced stereo 3DTV are studied and compared. The frequency characteristics and preserved frequency content of each scheme are analyzed, and a simple interpolation filter is specially designed. Finally, the advantages and disadvantages of the different schemes and filters are evaluated through quality testing on several progressive and interlaced video sequences.
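As a rough illustration of one such HDTV-compatible scheme (side-by-side packing is assumed here; the text compares several), the sketch below decimates each view horizontally and recovers full width with plain linear interpolation standing in for the specially designed filter.

```python
import numpy as np

def pack_side_by_side(left, right):
    """Pack a stereo pair into one HDTV-width frame by keeping every
    other column of each view (side-by-side subsampling, assumed here
    for illustration among the schemes compared in the text)."""
    return np.hstack([left[:, ::2], right[:, ::2]])

def unpack_and_interpolate(frame):
    """Recover full-width views, filling the dropped columns by linear
    interpolation between surviving neighbours (a simple stand-in for
    a properly designed interpolation filter)."""
    h, w = frame.shape
    half = w // 2
    views = []
    for sub in (frame[:, :half], frame[:, half:]):
        full = np.zeros((h, 2 * half), dtype=float)
        full[:, ::2] = sub                                  # surviving columns
        full[:, 1:-1:2] = 0.5 * (sub[:, :-1] + sub[:, 1:])  # average neighbours
        full[:, -1] = sub[:, -1]                            # replicate edge
        views.append(full)
    return views
```

Constant or slowly varying content is recovered exactly; high horizontal frequencies beyond the decimated Nyquist limit are lost, which is precisely the trade-off the frequency analysis in the text quantifies.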

    Robust 3D multi-camera tracking from 2D mono-camera tracks by Bayesian association

    Visual tracking of people is essential for automatic scene understanding and surveillance of areas of interest. Monocular 2D tracking has been widely studied, but it usually provides inadequate information for event interpretation, and also proves insufficiently robust due to viewpoint limitations (occlusions, etc.). In this paper, we present a lightweight but automatic and robust 3D tracking method using multiple calibrated cameras. It is based on off-the-shelf 2D tracking systems running independently in each camera of the system, combined using Bayesian association of the monocular tracks. The proposed system shows excellent results even in challenging situations, proving itself able to automatically bootstrap and recover from possible errors.

    Simultaneous 3D object tracking and camera parameter estimation by Bayesian methods and transdimensional MCMC sampling

    Multi-camera 3D tracking systems with overlapping cameras represent a powerful means for scene analysis, as they potentially allow greater robustness than monocular systems and provide useful 3D information about object location and movement. However, their performance relies on accurately calibrated camera networks, which is not a realistic assumption in real surveillance environments. Here, we introduce a multi-camera system for tracking the 3D position of a varying number of objects while simultaneously refining the calibration of the network of overlapping cameras. To this end, we introduce a Bayesian framework that combines Particle Filtering for tracking with recursive Bayesian estimation methods by means of adapted transdimensional MCMC sampling. Additionally, the system has been designed to work on simple motion detection masks, making it suitable for camera networks with low transmission capabilities. Tests show that our approach performs successfully even when starting from clearly inaccurate camera calibrations, which would ruin conventional approaches.
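The transdimensional sampling idea (the state's dimension changes as objects appear and disappear) can be illustrated with a toy birth/death MCMC over a variable-size state. The target below, a Poisson-distributed object count with standard-normal positions, and its textbook acceptance ratios are an illustrative stand-in, not the paper's adapted sampler.

```python
import numpy as np

rng = np.random.default_rng(0)

def birth_death_mcmc(lam=3.0, n_iters=20000):
    """Minimal transdimensional (birth/death) MCMC. Target: object count
    k ~ Poisson(lam), positions i.i.d. standard normal. Births propose
    positions from that same normal, so the position terms cancel and
    the acceptance probabilities reduce to lam/(k+1) and k/lam."""
    state = []                                 # variable-length object list
    counts = []
    for _ in range(n_iters):
        k = len(state)
        if rng.random() < 0.5:                 # birth move: add an object
            if rng.random() < lam / (k + 1):   # accept w.p. min(1, lam/(k+1))
                state.append(rng.normal())
        elif k > 0:                            # death move: remove an object
            if rng.random() < k / lam:         # accept w.p. min(1, k/lam)
                state.pop(rng.integers(k))
        counts.append(len(state))
    return np.array(counts)
```

After burn-in the chain's object count is Poisson(lam)-distributed; the paper's sampler applies the same jump mechanism to a far richer state that also carries camera parameters.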

    Capabilities and limitations of mono-camera pedestrian-based autocalibration

    Many environments lack sufficient architectural information to allow autocalibration based on features extracted from the scene structure. Nevertheless, the observation of walking people over time can generally be used to estimate the vertical vanishing point and the horizon line in the acquired image. However, this information is not enough to calibrate a general camera without assuming excessive simplifications. This paper presents a study of the capabilities and limitations of mono-camera calibration methods based solely on knowledge of the vertical vanishing point and the horizon line in the image. The mathematical analysis sets the conditions that ensure the feasibility of mono-camera pedestrian-based autocalibration. In addition, examples of applications are presented and discussed.
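Under the usual simplifications such analyses rest on (zero skew, unit aspect ratio, known principal point), the vertical vanishing point and the horizon line do determine the focal length: the vertical direction is orthogonal to every ground direction, so for the foot u of the perpendicular from the principal point p to the horizon, (v - p) . (u - p) = -f^2. A minimal sketch of that computation, with illustrative names:

```python
import numpy as np

def focal_from_vvp_and_horizon(v, line, pp):
    """Focal length from the vertical vanishing point `v` and the horizon
    line `line` = (a, b, c) with a*x + b*y + c = 0, assuming zero skew,
    unit aspect ratio and a known principal point `pp` (the kind of
    simplification discussed in the text)."""
    v = np.asarray(v, float)
    pp = np.asarray(pp, float)
    a, b, c = line
    n = np.array([a, b], float)
    # Foot of the perpendicular from the principal point to the horizon
    u = pp - (n @ pp + c) / (n @ n) * n
    f2 = -np.dot(v - pp, u - pp)        # orthogonality of vanishing directions
    if f2 <= 0:
        raise ValueError("inconsistent vanishing geometry")
    return float(np.sqrt(f2))
```

Note that only one scalar (f) is recoverable this way, which is exactly why a general camera cannot be calibrated from these two cues alone.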

    Camera localization using trajectories and maps

    We propose a new Bayesian framework for automatically determining the position (location and orientation) of an uncalibrated camera using the observations of moving objects and a schematic map of the passable areas of the environment. Our approach takes advantage of static and dynamic information on the scene structures through prior probability distributions for object dynamics. The proposed approach restricts the plausible positions where the sensor can be located while taking into account the inherent ambiguity of the given setting. The framework samples from the posterior probability distribution for the camera position via data-driven MCMC, guided by an initial geometric analysis that restricts the search space. A Kullback-Leibler divergence analysis then yields the final camera position estimate, while explicitly isolating ambiguous settings. The proposed approach is evaluated in synthetic and real environments, showing its satisfactory performance in both ambiguous and unambiguous settings.
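The Kullback-Leibler step can be illustrated with a toy ambiguity detector: compare the empirical distribution of posterior position samples against a single Gaussian fitted to them, so a multimodal (ambiguous) posterior yields a large divergence while a unimodal one yields a small divergence. Everything below is an illustrative sketch, not the paper's exact analysis.

```python
import numpy as np

def ambiguity_score(samples, bins=30):
    """KL divergence between the empirical (histogram) distribution of
    1D posterior samples and a single Gaussian fitted to them; large
    values flag multimodal, i.e. ambiguous, posteriors."""
    samples = np.asarray(samples, float)
    hist, edges = np.histogram(samples, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    p = hist * (edges[1] - edges[0])                 # empirical bin probabilities
    mu, sigma = samples.mean(), samples.std()
    q = np.exp(-0.5 * ((centers - mu) / sigma) ** 2)
    q = q / q.sum()                                  # discretised Gaussian fit
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))
```

In a full pipeline one would apply such a test to the MCMC samples of camera pose and report the setting as ambiguous, instead of committing to a single estimate, whenever the score is high.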

    Statistical moving object detection for mobile devices with camera

    A novel, high-quality system for moving object detection in sequences recorded with moving cameras is proposed. The system is based on the collaboration between an automatic homography estimation module for image alignment and a robust moving object detection module using efficient spatiotemporal nonparametric background modeling.
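A minimal numpy-only sketch of that collaboration, assuming the inter-frame homography is already estimated and substituting a fixed threshold for the nonparametric background model:

```python
import numpy as np

def warp_homography(img, H):
    """Align the previous frame to the current one with a 3x3 homography
    H (nearest-neighbour sampling), compensating camera motion."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    src = np.linalg.inv(H) @ pts                  # back-map output pixels
    sx = np.round(src[0] / src[2]).astype(int)
    sy = np.round(src[1] / src[2]).astype(int)
    ok = (sx >= 0) & (sx < w) & (sy >= 0) & (sy < h)
    out = np.zeros_like(img)
    out.ravel()[ok] = img[sy[ok], sx[ok]]
    return out

def moving_mask(prev, curr, H, thresh=25):
    """Flag pixels that still differ after motion compensation; a fixed
    threshold stands in for the nonparametric background model."""
    return np.abs(curr.astype(int) - warp_homography(prev, H).astype(int)) > thresh
```

After alignment, camera-induced motion cancels out and only genuinely moving objects survive the differencing, which is the division of labour the abstract describes.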

    Versatile Bayesian classifier for moving object detection by non-parametric background-foreground modeling

    In recent years, several moving object detection strategies based on non-parametric background-foreground modeling have been proposed. To combine both models and obtain the probability that a pixel belongs to the foreground, these strategies make use of Bayesian classifiers. However, these classifiers cannot take advantage of additional prior information that differs across pixels. We therefore propose a novel and efficient alternative Bayesian classifier that is suitable for this kind of strategy and that allows the use of arbitrary prior information. Additionally, we present an effective method to dynamically estimate the prior probability from the result of a particle-filter-based tracking strategy.

    Adaptable Bayesian classifier for spatiotemporal nonparametric moving object detection strategies

    Electronic devices endowed with cameras require new and powerful machine vision applications, which commonly include moving object detection strategies. To obtain high-quality results, the most recent strategies estimate background and foreground models nonparametrically and combine them by means of a Bayesian classifier. However, typical classifiers are limited by the use of constant prior values and do not allow the inclusion of additional spatially dependent prior information. In this Letter, we propose an alternative Bayesian classifier that, unlike those reported before, allows the use of additional prior information obtained from any source and depending on the spatial position of each pixel.
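The kind of classifier discussed here and in the previous abstract can be sketched as a per-pixel Bayes rule whose prior is an image rather than a constant. The function below is an illustrative sketch, not the Letter's exact formulation.

```python
import numpy as np

def foreground_posterior(p_fg, p_bg, prior_fg):
    """Per-pixel Bayesian classifier: combine foreground and background
    likelihoods (e.g. from nonparametric kernel density estimates) with
    a spatially varying foreground prior (e.g. supplied by a tracker).
    All inputs are H x W arrays; returns P(foreground | pixel)."""
    num = p_fg * prior_fg
    den = num + p_bg * (1.0 - prior_fg)
    # Where both likelihoods vanish the posterior is undefined; fall
    # back to an uninformative 0.5 rather than dividing by zero.
    return np.where(den > 0, num / np.maximum(den, 1e-12), 0.5)
```

With a constant prior of 0.5 this reduces to the usual likelihood-ratio classifier; raising the prior inside a tracked object's predicted region makes detections there more persistent, which is the benefit the Letter argues for.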

    3D Tracking Using Multi-view Based Particle Filters

    Visual surveillance and monitoring of indoor environments using multiple cameras has become a field of great activity in computer vision. Usual 3D tracking and positioning systems rely on several independent 2D tracking modules applied over individual camera streams, fused using geometrical relationships across cameras. As 2D tracking systems suffer inherent difficulties due to point-of-view limitations (perceptually similar foreground and background regions causing fragmentation of moving objects, occlusions), 3D tracking based on partially erroneous 2D tracks is likely to fail when handling multiple-people interaction. To overcome this problem, this paper proposes a Bayesian framework for combining 2D low-level cues from multiple cameras directly into the 3D world through 3D Particle Filters. This method allows estimating the probability of a certain volume being occupied by a moving object, and thus segmenting and tracking multiple people across the monitored area. The proposed method is developed on the basis of simple, binary 2D moving-region segmentation in each camera, considered as different state observations. In addition, the method proves well suited to integrating additional 2D low-level cues to increase system robustness to occlusions: along this line, a naïve color-based (HSI) appearance model has been integrated, resulting in clear performance improvements when dealing with complex scenarios.
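The occupancy idea can be sketched as follows: project a candidate 3D position into every camera's binary foreground mask and multiply per-camera likelihoods, with a small miss probability so one failed 2D segmentation does not veto the whole volume. Names and the softening constant are illustrative assumptions.

```python
import numpy as np

def occupancy_likelihood(point3d, projections, masks, miss=0.1):
    """Likelihood that a 3D position is occupied, given binary foreground
    masks from several cameras (the per-camera state observations).
    `projections` are 3x4 camera matrices; `miss` softens each mask so
    the product is robust to a single segmentation failure."""
    X = np.append(np.asarray(point3d, float), 1.0)   # homogeneous 3D point
    lik = 1.0
    for P, mask in zip(projections, masks):
        x = P @ X                                    # project into this view
        u, v = int(round(x[0] / x[2])), int(round(x[1] / x[2]))
        h, w = mask.shape
        inside = 0 <= u < w and 0 <= v < h
        hit = bool(mask[v, u]) if inside else False
        lik *= (1.0 - miss) if hit else miss
    return lik
```

A 3D Particle Filter can use this value directly as the particle weight: particles inside a person's visual hull score (1 - miss) per camera, while empty space scores near miss per camera, which is what lets the filter segment and track each person in the volume.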